
    On the Security of the Yi-Tan-Siew Chaos-Based Cipher

    This paper presents a comprehensive analysis of the security of the Yi-Tan-Siew chaotic cipher proposed in [IEEE TCAS-I 49(12):1826-1829 (2002)]. A differential chosen-plaintext attack and a differential chosen-ciphertext attack are suggested to break the sub-key K, under the assumption that the time stamp can be altered by the attacker, which is reasonable in such attacks. Also, some security problems concerning the sub-keys α and β are clarified, from both theoretical and experimental points of view. Further analysis shows that the security of this cipher is independent of the use of the chaotic tent map, once the sub-key K is removed via the proposed differential chosen-plaintext attack.
    Comment: 5 pages, 3 figures, IEEEtrans.cls v 1.
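
    As a minimal sketch of the idea behind such a differential chosen-plaintext attack, the toy byte cipher below (my own illustration, not the actual Yi-Tan-Siew construction) combines an unknown sub-key K additively with the plaintext and masks the result with a time-dependent keystream. Replaying the same timestamp for chosen plaintext pairs cancels the keystream in the ciphertext difference and exposes K.

    # Toy model (assumption, NOT the Yi-Tan-Siew cipher):
    #   C = ((P + K) mod 256) XOR S(t)
    # K is the unknown sub-key, S(t) a time-dependent keystream that the
    # attacker holds fixed by replaying the same timestamp t.

    K_SECRET = 173                       # unknown to the attacker

    def keystream(t):                    # stand-in time-dependent mask
        return (t * 97 + 31) % 256

    def encrypt(p, t):
        return ((p + K_SECRET) % 256) ^ keystream(t)

    # Attacker: encrypt chosen plaintext pairs under the same timestamp.
    # XORing the two ciphertexts cancels S(t); the remaining difference
    # depends only on K and can be sieved exhaustively.
    t = 42
    candidates = set(range(256))
    for p2 in (1, 2, 4, 8, 16, 32, 64):
        delta_c = encrypt(0, t) ^ encrypt(p2, t)
        candidates &= {k for k in range(256)
                       if ((0 + k) % 256) ^ ((p2 + k) % 256) == delta_c}

    print(sorted(candidates))            # [45, 173]: K up to its top bit,
                                         # since K and K+128 produce identical
                                         # XOR differences in this toy model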

    Quench Dynamics of Topological Maximally-Entangled States

    We investigate the quench dynamics of the one-particle entanglement spectra (OPES) for systems with topologically nontrivial phases. Using dimerized chains as an example, we demonstrate that the evolution of the OPES for quenched bipartite systems is governed by an effective Hamiltonian characterized by a pseudo spin in a time-dependent pseudo magnetic field S(k,t). The existence and evolution of the topological maximally-entangled edge states are determined by the winding number of S(k,t) in k-space. In particular, the maximally-entangled edge states survive only if nontrivial Berry phases are induced by the winding of S(k,t). In the infinite-time limit the equilibrium OPES are determined by an effective time-independent pseudo magnetic field S_eff(k). Furthermore, when the maximally-entangled edge states are unstable, they are destroyed by quasiparticles within a characteristic timescale proportional to the system size.
    Comment: 5 pages, 3 figures
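
    As a minimal sketch (assuming the standard SSH parameterization of a dimerized chain, which is the kind of example the abstract refers to), the equilibrium pseudo magnetic field can be written as S(k) = (v + w cos k, w sin k, 0), and its winding number around the origin distinguishes the phase that hosts maximally-entangled edge states (winding 1) from the trivial one (winding 0). The names v and w (intra- and inter-cell hopping) are my own choice of notation.

    import numpy as np

    def winding_number(v, w, nk=2001):
        """Winding of S(k) = (v + w cos k, w sin k) around the origin."""
        k = np.linspace(-np.pi, np.pi, nk)
        Sx = v + w * np.cos(k)
        Sy = w * np.sin(k)
        theta = np.unwrap(np.arctan2(Sy, Sx))   # continuous angle of S(k)
        return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

    print(winding_number(v=0.5, w=1.0))   # 1: topological, edge states exist
    print(winding_number(v=1.0, w=0.5))   # 0: trivial, no edge states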

    A Comparative Study on Regularization Strategies for Embedding-based Neural Networks

    This paper compares different regularization strategies for addressing a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed and tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and the combination of different regularizations. The results provide a picture of how to tune hyperparameters for neural NLP models.
    Comment: EMNLP '1
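
    As a minimal sketch (PyTorch, my own illustration rather than the paper's code), the snippet below shows how three of the compared strategies are typically wired into an embedding-based model: an L2 penalty on the ordinary weights with embeddings excluded, an optional L2 penalty on the embeddings themselves, and dropout applied to the embedded input. The model and hyperparameters are placeholders.

    import torch
    import torch.nn as nn

    class EmbedClassifier(nn.Module):
        def __init__(self, vocab=10000, dim=100, classes=5, p_drop=0.5):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.drop = nn.Dropout(p_drop)            # dropout strategy
            self.fc = nn.Linear(dim, classes)

        def forward(self, tokens):                    # tokens: (batch, seq_len)
            x = self.drop(self.emb(tokens)).mean(dim=1)   # average pooling
            return self.fc(x)

    model = EmbedClassifier()

    # Separate parameter groups so the L2 penalty (weight_decay) can target
    # the ordinary weights only, the embeddings only, or both.
    optimizer = torch.optim.Adam([
        {"params": model.fc.parameters(),  "weight_decay": 1e-4},  # penalize weights
        {"params": model.emb.parameters(), "weight_decay": 0.0},   # embeddings excluded
    ], lr=1e-3)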